\input basic
\jpar 1000
\varunit 1truein
\parskip .15in plus 5pt \baselineskip 15pt \lineskip 1pt
\def\rm{\:a} \def\sl{\:n} \def\bf{\:q} \def\it{\:?}
\font m=cmsc10 		%  Small caps
\def\sc{\:m}		%  Small Caps  (10pt)
\font N=cmdunh		%  Titles
\font M=cmr18 		%  Titles
\font <=cmb10		%  Bolder
\font t=cmtt9[tex,sys]
%\output{\advcount0\baselineskip 18pt\page}
\output{\baselineskip 18pt\page\ctrline{\curfont a\count0}\advcount0}
\def\sect#1.{\vskip 20pt plus20pt minus 7pt\penalty -100
   \secta#1.}
\def\secta#1.{\noindent{\bf \count1.\quad #1}\par\penalty 1000\advcount1\par
\vskip 5pt plus4pt minus 3pt\penalty 1000}

\setcount1 1
\setcount0 1

\def\ssect#1.{\yyskip{\bf\penalty -50\hbox{\vbox{\hbox{#1}}}
\penalty 800}} 
\def \ni{\noindent}

\def\textindent#1{\noindent\hbox to 19pt{\hskip 0pt plus 1000pt minus 1000pt#1\ 
\!}}
\def\hang{\hangindent 19pt}

\def\numitem#1{\yskip\hang\textindent{#1}}

\def\ref:#1.#2.{[#1;#2]}

\def	\bcode	{\par\vfil\penalty400\vfilneg\parskip 0pt plus 1 pt\vskip .25 in\: t\beginpassasis}
\def	\ecode  {\endpassasis\: a\par\vskip .05 in\vfil\penalty500\vfilneg\parskip .1 in plus 5pt}

\input pai[can,rpg]

\def\eat#1\endeat{}	% macro for extended comments
\def\makeref:#1.#2.#3{\par{\noindent \parshape 2 0pt 353pt
20pt 333pt \hbox{[#1;#2]}\linebreak 
{#3}}\par\parshape 0\vfil\penalty 0\vfilneg}
\def\article#1{{\sl #1}}
\def\book#1{{\:< ``#1''}}
\def\yyskip{\penalty-100\vskip8pt plus6pt minus4pt}
\def\yskip{\penalty-50\vskip 3pt plus 3pt minus 2pt}
\def\bslskp{\baselineskip 15pt}
\ctrline{\:M Basic Research}
\vskip .2truein
\ctrline{\:M in}
\vskip .2truein
\ctrline{\:M Artificial Intelligence}
\vskip .2truein
\ctrline{\:M and}
\vskip .2truein
\ctrline{\:M Formal Reasoning}

\vfill

\noindent {\bf Personnel}: John\ McCarthy, Lewis\ Creary, Richard\ Gabriel, 
Christopher\ Goad,
Jussi\ Ketonen, Carolyn\ Talcott, Yoram\ Moses, Joseph\ Weening

\vfill\eject
\sect Introduction.

This document is a statement of work submitted to the Defense Advanced
Research Projects Agency.  Each section presents a schedule of milestones
for a particular project.  It is assumed that the reader has some
familiarity with the current proposal.

\sect Formal Reasoning.

\ssect Objectives.

	McCarthy will continue his work on formalizing commonsense
reasoning during the period of the proposal.  The following will be
emphasized.

\ni 1. Inventory of human capabilities for commonsense representation,
commonsense knowledge, and commonsense reasoning.

There is widespread agreement that lack of general commonsense knowledge
is the key present limitation on the applicability of AI programs.
However, each of the authors making this point (for example: Newell, Nilsson,
Minsky, Genesereth and McCarthy) has had to content himself with giving
examples.  The time seems to be ripe to attempt a comprehensive list.
McCarthy will prepare a paper giving such a list during 1983 and plans to
use this paper as a basis for discussion with other AI researchers.  It is
extremely unlikely that the initial list will be accepted as
comprehensive, and we plan further editions over the three-year period of
the contract.  Insofar as our list is comprehensive, individual AI
researchers can concentrate their attention on specific aspects of commonsense
knowledge or reasoning.

\ni 2. Formalization of additional commonsense concepts.  A
key problem that has been around for many years is that of formalizing
the consequences of actions where concurrent events are taking
place.  There are some new ideas, and Yoram Moses is exploring
writing a thesis in this area.  So far as we know, no existing AI programs
can handle this at all.

	The simple blocks world formalisms have long suffered from
the `frame problem' of specifying what doesn't change when an action
takes place.  McCarthy has developed a formalization using circumscription
that avoids requiring that the description of an action describe
what doesn't change.  He will try to elaborate this into a general
solution of the frame problem.
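Schematically, one version of the idea uses an abnormality predicate:
a fluent is asserted to persist through an event unless it is abnormal,
$$\forall f\,e\,s.\;holds(f,s)\land\lnot ab(f,e,s)\supset holds(f,result(e,s)),$$
and $ab$ is then circumscribed.  (This formula is our illustration of
the approach, not necessarily the notation McCarthy will adopt.)
Minimizing abnormality means that the axioms for an action need mention
only what the action changes; persistence of everything else is
inferred.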

\ni 3. Non-monotonic reasoning.  Circumscription has mathematical
properties related to the notion of satisfaction in mathematical logic.
McCarthy has shown (unpublished) that in the propositional case determining
the models resulting from circumscribing a formula leads to a tree
of satisfaction problems.  The useful case of predicate circumscription
should lead to a generalization of the satisfaction problem.  Progress
probably depends on attracting the attention of mathematical logicians
to the problem.  Yoram Moses has started on some aspects of it.
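For reference, the circumscription of a predicate $P$ in a sentence
$A(P)$ can be written as the second-order sentence
$$A(P)\land\forall P'.[A(P')\land\forall x.(P'(x)\supset P(x))\;\supset\;\forall x.(P(x)\supset P'(x))],$$
whose models are just the models of $A(P)$ in which the extension of
$P$ is minimal; in the propositional case the search for these minimal
models is what generates the tree of satisfaction problems mentioned
above.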

	McCarthy has identified two important properties of the human intellect
that have not been put in computer programs.  We call them `ambiguity
tolerance' and `elaboration tolerance,' and they can both be treated by
circumscription.  (Ambiguity tolerance was mentioned vaguely by Dreyfus
in his book ``What Computers Can't Do'' as something that computers
can't do.)

	`Ambiguity tolerance' refers to the fact that humans successfully
use concepts that are subsequently shown to be ambiguous.  Even after
the ambiguity is understood, a person uses the ambiguous form of the
concept in situations in which the ambiguity doesn't show up.
If all concepts used by AI programs were required to be unambiguous, it
would mean that all possible philosophy would have to be done before
any AI.  Therefore, AI programs must use ambiguous concepts.  It seems
that circumscription or other non-monotonic reasoning can be used to
allow the inference that a concept is unambiguous in a particular
instance.

`Elaboration tolerance' is a similar phenomenon.  We can use a
representation in which the location of a university, or even a building, is
considered independent of the situation.  Nevertheless, if it is necessary
to consider elaborating the representation in order to consider moving the
institution, this can be done.  Again circumscription can be used to make
this capability available to programs.

\ni 4. Identifying intellectual mechanisms.

Besides those based on non-monotonic reasoning, human intelligence
embodies many mechanisms not yet incorporated in formal systems.  McCarthy
has uncovered some of these and plans to study them.  Here are examples:

\numitem{a.}Common Business Communication Language

\numitem{}McCarthy (1982) explores the requirements for business communication
between programs belonging to different organizations.  For example, a
purchasing program belonging to one company might inquire about the price
and delivery of some product and place an order.  Similarly a program
belonging to one military organization might inquire about the readiness
of airplanes or the availability of certain spare parts.  With a suitable
authorization it might issue a requisition.  When this problem was first
considered it was supposed that it was just a standardization
problem, but further examination shows that it involves formalizing
the semantics (not syntax) of a substantial and interesting
fragment of natural language.  The present languages used for
communication between AI programs and their users are inadequate
to express the concepts required.  The internal languages so far used
in AI are also inadequate.
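(A hypothetical example of such a message is given at the end of this
section.)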

\numitem{b.}Modifying programs without reading them.

\numitem{}Modifying the behavior of a computer program usually requires a detailed
study of the relevant part of the program and the data structures it uses.
Much less is required in communicating with a human.  For example, the
head of an airline might tell his subordinates that Zoroastrian
passengers should receive a complimentary drink if they fly on Zoroaster's
birthday.  He does not have to mention or even know how records are
kept---whether in the head of a ticket agent for a very small airline, in
ledgers for an old-fashioned airline, or in a computer.  The reason he can
do this is that he is allowed to refer to external events---not just
inputs---in expressing his wishes.  McCarthy has noticed that with
suitable programming languages computers can also be instructed without
understanding their internal workings.  The Elephant language, developed
by McCarthy, obviates the need to mention many data structures, because
the user can refer to past events without referring to any data structure
that remembers past events.  We plan to explore these possibilities
further.
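(A Lisp sketch of the idea is given at the end of this section.)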


\numitem{c.}Reification.

\numitem{}Human thought and language often make nouns of verbs
and adjectives.  Some examples are: think $\→$ thought, red $\→$ redness,
to attack $\→$ attack.  Philosophers call this `reification,' i.e.
`making a thing of something,' from the Latin word `res,' meaning `thing.'
They have mainly been concerned with cases in which reification
leads to error.

\numitem{}However, reification seems to be an important intellectual
process and AI programs will have to do it.  For instance, the question:
`How many things are still wrong with the boat now that you have fixed
some of them?'  requires the reification of `things that are still wrong'
and `things that you have fixed.'  We plan to study reification and to
formalize some AI examples in which it is useful.
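
As an illustration of item (a), here is a purely hypothetical example of
the kind of message a purchasing program might send.  CBCL has no settled
syntax; every symbol below is invented to fix ideas.

\bcode
(request
  (quote-price (item (type bolt)
                     (diameter (inches 1/4))
                     (length (inches 2)))
               (quantity 1000)
               (deliver-by (date 1983 12 1))))
\ecode

The point is that symbols such as `quote-price' and `deliver-by' must
denote concepts whose semantics both programs share; the problem is
formalizing those concepts, not agreeing on a record layout.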
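
For item (b), the following Lisp sketch (ours, not McCarthy's Elephant
notation; the event representation is invented) shows a program phrased
in terms of past events, with no reservation table anywhere in sight.

\bcode
(defvar *history* '())          ; the record of past external events

(defun happens (event)          ; an event occurs and is remembered
  (push event *history*))

;; A passenger holds a reservation if he has made one and has not
;; cancelled it since.  *HISTORY* is newest-first, so the most
;; recent relevant event decides.
(defun has-reservation-p (passenger flight)
  (dolist (event *history*)
    (cond ((equal event (list 'cancel passenger flight))
           (return nil))
          ((equal event (list 'reserve passenger flight))
           (return t)))))
\ecode

An Elephant-style compiler would be free to introduce whatever table or
index makes such queries efficient; the programmer states the condition
in terms of events, and the data structure is the compiler's business.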

\sect Advice-taking ANALYST.

\ssect Objectives  (March 1983--December 1986).

\ni 1. Further detailed epistemological studies of the commonsense reasoning
involved in planning and plan recognition.

\ni 2. Detailed study of the heuristic knowledge employed in commonsense
reasoning, planning, and plan recognition.

\ni 3. Completion, testing, and further refinement of the NFLT matcher
mentioned in accomplishment 3.

\ni 4. Further development of and experimentation with the commonsense reasoning
component of ANALYST.

\ni 5. Completion, testing, and further development of the plan-producing
component of ANALYST.

\ni 6. Design and implementation of a non-trivial memory-indexing scheme for the
ANALYST (the current NFLT discrimination net is not intended for use in
information retrieval).

\ni 7. Transfer of the ANALYST code to a Symbolics 3600 Lisp Machine (for increased
address space and speed).

\ni 8. Implementation of an ANALYST capability to recognize an agent's plans,
given his goals and behavior.

\ni 9. Implementation of an ANALYST capability to infer an agent's plans and
goals, given his behavior.

\ni 10. Implementation of a non-trivial advice-taking capability for the ANALYST.

\sect Formalisms for Reasoning about Programs.

\ssect Objectives  (March 1983--December 1986).

\ni 1. We will continue the project of identifying, formalizing, and
proving interesting and useful properties of typical programs.
In particular we will develop theories of finite sets and finite functions,
and apply these theories to proving properties of programs.

\ni 2. We will define notions of equivalence useful for proving properties
of PFN's and use these definitions in developing the theory of PFN's.

\ni 3.  We will develop mathematical properties of algebraic structures with
multiargument/multivalue operations (cart algebras) and identify some
useful notions of homomorphism.  The results will be applied to
representation of PFN's and to proving properties about PFN's and
programs.  They will also be applied to ideas for data structure definition
using cart algebras.

\ni 4. We will define encodings of computation structures as data 
and use these to explain macros, metafunctions, and dynamic aspects
of computation systems.

\ni 5. We will continue work on our ideas for using shapes to facilitate
reasoning about PFN's.  In particular:

\numitem{i.}we will derive rules for proving properties of well-shaped
PFN's;

\numitem{ii.}we will use shapes to identify a class of `hereditarily consistent'
PFN's \ref:Platek.1966. for which a generalization of McCarthy's
minimization scheme holds; and

\numitem{iii.}we will use rational shapes (the Y combinator is well-shaped) to treat
self-applicative PFN's, streams, and other interesting PFN's.

\ni 6.  We will further develop methods for proving properties of memory structures.
There is a simple extension to our basic model that allows treatment of
programs operating on such structures.  What is needed is to develop
methods that make it practical and useful to prove properties
of these programs.

\ni 7.  We will formalize both the general methods for producing derived
programs and the computation defined by these programs to provide means for
stating and proving the `correctness' of derived programs.

We will identify further properties to be computed by derived programs and
methods for deriving them.  We will investigate the method of specializing 
an interpreter as a general method for producing derived programs.
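(A toy illustration of the specialization idea appears at the end of
this section.)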

\ni 8.  We will continue work on defining a formal system for reasoning about
PFN's.  We will define a basic formal system and enrich the collection of
available formal notions via definitions and derived principles so that
the work of formalization can be carried out in a direct and natural
manner.

We will also identify principles for extending the formal system based on
the formal metatheory.
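
As a toy illustration of the interpreter-specialization idea of item 7
(our own, under drastically simplified assumptions; the names are
invented), consider an interpreter for a tiny expression language and a
specializer that performs the interpreter's analysis of the expression
once, in advance, yielding a derived program for that expression.

\bcode
(defun eval-expr (e env)        ; the general interpreter
  (cond ((numberp e) e)
        ((symbolp e) (cdr (assoc e env)))
        ((eq (car e) '+) (+ (eval-expr (cadr e) env)
                            (eval-expr (caddr e) env)))
        ((eq (car e) '*) (* (eval-expr (cadr e) env)
                            (eval-expr (caddr e) env)))))

(defun specialize (e)           ; the derived program for E
  (cond ((numberp e) #'(lambda (env) (declare (ignore env)) e))
        ((symbolp e) #'(lambda (env) (cdr (assoc e env))))
        (t (let ((f (specialize (cadr e)))
                 (g (specialize (caddr e)))
                 (op (if (eq (car e) '+) #'+ #'*)))
             #'(lambda (env)
                 (funcall op (funcall f env) (funcall g env)))))))

;; (eval-expr '(+ x (* x 2)) '((x . 5)))            => 15
;; (funcall (specialize '(+ x (* x 2))) '((x . 5))) => 15
\ecode

Stating and proving the `correctness' of the derived program then means
showing that it agrees with the interpreter on every environment.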

\vfill\eject
\sect EKL.

\ssect Objectives.

During the next two years Ketonen plans to:

\ni 1. apply the techniques for proof checking that he has
developed to verifying the correctness of the various parsing algorithms
that are currently in fashion.

\ni 2. develop more powerful proof manipulation techniques in order
to study transformations of programs and proofs.  This methodology will then be
used to extract computational information out of correctness proofs of
programs and other mathematical facts.

\ni 3. continue his research on logical algorithms.  The paper of Ketonen and
Weyhrauch on a natural, decidable fragment of predicate calculus has been
submitted for publication.  Many interesting problems remain: for example,
the proper use of equalities in the decision procedure mentioned above and
its correct implementation. In implementing a new version of this decision
procedure Ketonen intends to use and refine the notion of postponement
developed by McCarthy.

\ni 4. extend the EKL rewriter further; an important future topic is the use
and elimination of definitions. Another direction for Ketonen's research
is the use of higher-order unification in rewriting: Ketonen has shown that
the higher-order unification algorithm of Huet terminates when the unifiable
variables come from only one side, a situation that occurs naturally in
rewriting.  As is well known, the termination problem for higher-order
unification is unsolvable in general.

\sect Lisp Performance Evaluation.

\ssect Objectives.

\ni 1. Complete the current suite of benchmarks on each of the implementations
mentioned.

\ni 2. Obtain the set of results and collate them into the final report, with
interpretations of their significance, to be submitted in December 1983.

\ni 3. Several other investigators have mentioned that they have noticed that
the sum total of results of a number of small benchmarks can conflict with
the results of a larger benchmark.  It is proposed that a follow-on project be
undertaken to investigate the predictive power of small benchmarks on the
performance of a large program on a given Lisp implementation.
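(A typical small benchmark is exhibited at the end of this section.)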

\ni 4. While translating these benchmarks into InterLisp, expert
InterLisp programmers noted that the programming style of the 21 benchmarks
described above is contrary to good InterLisp programming style, and that
the D-Series computers and their InterLisps have been tuned to this
other style.  This brings up the question of how programming style and
performance of a Lisp system interact.  For instance, is it the case that a
performance profile for a Lisp system implies a style profile?  Or is it the
case that a style profile causes the implementors to tune a performance
profile?  We propose to study these issues.

\ni 5. After the above measurements have been made, new Lisp
implementations will continue to be developed. A low-resource benchmarking
effort with periodic updates to the benchmark data will be maintained
through 1986.

\ni 6. The S-1 MarkIIA uni- and multi-processor will be benchmarked, both
with respect to Lisp performance and with respect to overall performance
as compared with the DEC 2060 and other currently used processors. We will
draw on this set of Lisp benchmarks and any standard benchmarks for other
languages that may exist. This will be completed and a report submitted to
DARPA by December 1984.
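
For concreteness, here is a small benchmark of the kind in question: TAK,
the Takeuchi function, which exercises function call and small-integer
arithmetic almost exclusively.

\bcode
(defun tak (x y z)
  (if (not (< y x))
      z
      (tak (tak (1- x) y z)
           (tak (1- y) z x)
           (tak (1- z) x y))))

;; The timed call is typically (tak 18 12 6), which returns 7 after
;; some 63,000 recursive calls.
\ecode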

\sect Automatic Construction of Special-purpose Programs.


\ssect Objectives.

We are proposing to continue our work in special-purpose automatic
programming for a collection of related applications in three-dimensional
computer graphics and computer vision.  The work proposed here has two
aims:  first, to produce faster programs than any currently available for
specific practically important problems in computer graphics and vision by
the use of special-purpose automatic programming, and, second, to further
develop general tools for special-purpose automatic programming.

The first priority is to complete the work on vision described in the
objectives section above.  
In addition, we propose to extend our work in vision in the following
three directions.

\ni 1. The first might be called `model-based low-level vision.'  The low-level
functions that we are using in our current vision work are generic
edge-finding operators of a conventional kind; they are based on local
operators whose intent is to detect evidence of discontinuities in the
(discrete) intensity surface.  No use is made of the values of the
intensity function except at presumed discontinuities.  However, in model-based
vision applications a great deal of detailed information is
available concerning the object to be recognized beyond a simple
specification of where its edges lie.  The reflectivity function of the
object surfaces can be determined, as can the surface geometry between
edges.  Further, object edges often have a microstructure which produces
something more complicated than a simple discontinuity in the image
surface (for example, it is not uncommon to have a very narrow specular
reflection running along an edge).  Finally, in industrial situations
prior information is typically available about lighting conditions.  All
of this specializing information can be exploited to produce more
effective special-purpose vision programs: programs which rely on
special rather than generic edge operators, and which make use of shading
information as well as edge information.
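
Schematically, a generic local operator of this kind estimates the
gradient of the discrete intensity surface from neighboring pixels (this
is our own minimal example, not one of the operators actually in use):

\bcode
;; Gradient-magnitude estimate at (x,y) by central differences;
;; IMAGE is a two-dimensional array of intensities.
(defun edge-strength (image x y)
  (let ((gx (- (aref image (1+ x) y) (aref image (1- x) y)))
        (gy (- (aref image x (1+ y)) (aref image x (1- y)))))
    (+ (abs gx) (abs gy))))
\ecode

A specialized operator could instead build in the expected reflectivity
and edge microstructure of the particular object---for example, the
narrow specular stripe mentioned above.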

\ni 2. The second area involves making use of weaker---rather than
stronger---specializing information.  In the work we have done so far we
have assumed that the exact shape of the object to be displayed or recognized
is available in advance, so that programs adapted to that particular shape
can be constructed.  Often, however, less specific, but nonetheless useful
information is available in advance.  For example, in computer graphics,
many applications involve displaying a number of rigid objects, which move
relative to each other, so that there are more sources of variation at
run-time than the position of the viewer.  In computer vision, it is
useful to consider constructing special-purpose programs for classes of
objects such as the class of all airplanes (see \ref:Brooks.1981.), rather than
for specific objects.  Advance information may also take the form of
geometric constraints which apply to the situation at hand.  For example,
it might be known in advance that one object will rest on top of another,
without specific information being available about the position of either
object.  We would like to be able to exploit arbitrary advance knowledge
expressed by geometric constraints.  This aim is closely related to a
central aim in expert systems or knowledge engineering work, in which one
seeks to separate declarative information (which nonetheless has
algorithmic relevance) from direct algorithmic content.  Our strategy for
realizing this aim in the domains of graphics and vision is to develop
methods for compiling declarative information (constraints) into efficient
special-purpose programs.  With such methods available the user is able
to express his knowledge about individual situations declaratively; the
extraction of algorithmic content is automated.  This contrasts with the
usual knowledge engineering approach, which involves separating the
`knowledge' (expressed as rules) from the use of that knowledge by a
uniform inference procedure.  Acronym \ref:Brooks.1981. implements the latter
approach for the vision domain in that geometric rules and constraints are
handled by fully general-purpose mechanisms.  Partly as a consequence of
this architecture, Acronym is at present far too slow for most practical
vision applications.  In any case, we plan to investigate the ways in
which our tools apply in situations where specializing information takes
the form of geometric constraints.

\ni 3. The third area involves exploiting continuity of motion in tracking moving
objects (vision) and in displaying moving objects (graphics).  If images
are repeatedly acquired or generated at short time intervals for
configurations which move in a continuous manner, then it is normally not
necessary to start each computation from scratch.  Instead, relevant
information may be retained from one computation (frame) to the next and
updated to take the (minor) changes between frames into account.  When the
shape of the object being tracked or displayed is known in advance, the
natural formulation of the special-purpose programming task is this: one
wishes to automatically generate a special-purpose program for performing
the inter-frame update computation for the object in question.  The
selection of the information to be retained from one frame to another may
also be automated.  In attacking this problem, we can draw on the recent
work of Paige \ref:Paige.1982. on domain-independent methods for
automatically generating update computations (Paige refers to this process
as `formal differentiation').
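
As a schematic example of an update computation (ours, not Paige's
notation; the quantity summed is chosen only for simplicity), suppose
some quantity of interest is a sum over the frame.  Rather than
recomputing it from scratch, the derived program adjusts it by the
inter-frame change:

\bcode
(defun from-scratch (frame)     ; cost proportional to the frame
  (reduce #'+ frame))

;; DELTA lists (old-value . new-value) pairs for the elements that
;; changed between frames; cost is proportional to the change only.
(defun update (old-sum delta)
  (dolist (pair delta old-sum)
    (incf old-sum (- (cdr pair) (car pair)))))
\ecode

The special-purpose programming task is to generate the analogue of
UPDATE automatically for the much richer quantities arising in display
and tracking.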

\vfill\end